10 captivating images from National Geographic's Photo Ark

Popular Science

Since 2006, the project has photographed 17,000 species in the world's zoos, aquariums, and wildlife sanctuaries. Photographs from the Photo Ark will be featured in the inaugural exhibition at the National Geographic Museum of Exploration in Washington, D.C. A picture is said to be worth a thousand words, but some photographs are worth 17,000. Well, 17,000 species, that is. For National Geographic's Photo Ark project, photographer Joel Sartore is documenting all species living in the world's zoos, aquariums, and wildlife sanctuaries.




1,500th discovered bat species is a 'god of the island'

Popular Science

What better way to kick off Bat Appreciation Month? It's official: the world's 1,500th known bat species has been discovered in Equatorial Guinea, and as luck would have it, the announcement comes just in time for Bat Appreciation Month. Biologists estimate that bats have existed for at least 55 to 56 million years.


A Maslow-Inspired Hierarchy of Engagement with AI Model

Ogot, Madara

arXiv.org Artificial Intelligence

The rapid proliferation of artificial intelligence (AI) across industry, government, and education highlights the urgent need for robust frameworks to conceptualise and guide engagement. This paper introduces the Hierarchy of Engagement with AI model, a novel maturity framework inspired by Maslow's hierarchy of needs. The model conceptualises AI adoption as a progression through eight levels, beginning with initial exposure and basic understanding and culminating in ecosystem collaboration and societal impact. Each level integrates technical, organisational, and ethical dimensions, emphasising that AI maturity is not only a matter of infrastructure and capability but also of trust, governance, and responsibility. Initial validation of the model using four diverse case studies (General Motors, the Government of Estonia, the University of Texas System, and the African Union AI Strategy) demonstrates the model's contextual flexibility across various sectors. The model provides scholars with a framework for analysing AI maturity and offers practitioners and policymakers a diagnostic and strategic planning tool to guide responsible and sustainable AI engagement. The proposed model demonstrates that AI maturity progression is multi-dimensional, requiring technological capability, ethical integrity, organisational resilience, and ecosystem collaboration.


Not All Data Are Unlearned Equally

Krishnan, Aravind, Reddy, Siva, Mosbach, Marius

arXiv.org Artificial Intelligence

Machine unlearning is concerned with the task of removing knowledge learned from particular data points from a trained model. In the context of large language models (LLMs), unlearning has recently received increased attention, particularly for removing knowledge about named entities from models for privacy purposes. While various approaches have been proposed to address the unlearning problem, most existing approaches treat all data points to be unlearned equally, i.e., unlearning that Montreal is a city in Canada is treated exactly the same as unlearning the phone number of the first author of this paper. In this work, we show that this "all data is equal" assumption does not hold for LLM unlearning. We study how the success of unlearning depends on the frequency of the knowledge we want to unlearn in the pre-training data of a model and find that frequency strongly affects unlearning, i.e., more frequent knowledge is harder to unlearn. Additionally, we uncover a misalignment between probability- and generation-based evaluations of unlearning and show that this problem worsens as models become larger. Overall, our experiments highlight the need for better evaluation practices and novel methods for LLM unlearning that take the training data of models into account.
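The probability-versus-generation evaluation mismatch the abstract describes can be made concrete with a toy example. This is an illustrative sketch with invented numbers, not the paper's method or data: a model can pass a probability-based unlearning check (the target's probability falls below a threshold) while still failing a generation-based check (greedy decoding still emits the target).

```python
# Toy illustration of the probability- vs generation-based evaluation
# mismatch for unlearning (hypothetical numbers, not the paper's data).

def prob_eval(dist, target, threshold=0.5):
    """Probability-based: unlearning 'succeeds' if P(target) < threshold."""
    return dist[target] < threshold

def gen_eval(dist, target):
    """Generation-based: unlearning 'succeeds' if greedy decoding
    no longer emits the target answer."""
    greedy = max(dist, key=dist.get)
    return greedy != target

# Answer distribution after an unlearning step that pushed the target's
# probability down, but left it as the single most likely answer.
after_unlearning = {"Montreal": 0.40, "Toronto": 0.15, "Ottawa": 0.15,
                    "Vancouver": 0.15, "Calgary": 0.15}

print(prob_eval(after_unlearning, "Montreal"))  # True  -> looks unlearned
print(gen_eval(after_unlearning, "Montreal"))   # False -> still generated
```

The two metrics disagree precisely because probability mass can be reduced without dethroning the target as the argmax, which is one way such a misalignment can arise.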


Unveiling AI's Threats to Child Protection: Regulatory efforts to Criminalize AI-Generated CSAM and Emerging Children's Rights Violations

Kokolaki, Emmanouela, Fragopoulou, Paraskevi

arXiv.org Artificial Intelligence

This paper aims to present new alarming trends in the field of child sexual abuse through imagery, as part of SafeLine's research activities in the fields of cybercrime, child sexual abuse material (CSAM), and the protection of children's rights to safe online experiences. It focuses primarily on the phenomenon of AI-generated CSAM, the sophisticated production methods discussed in dark web forums, and the crucial role that open-source AI models play in the evolution of this overwhelming phenomenon. The paper's main contribution is a correlation analysis between the hotline's reports and domain names identified in dark web forums where users' discussions focus on exchanging information specifically related to the generation of AI-CSAM. The objective was to reveal the close connection between clear-net and dark-web content, which was accomplished through the use of the ATLAS dataset of the Voyager system. Furthermore, through the analysis of a set of posts extracted from the above dataset, valuable conclusions are drawn on the techniques forum members employ to produce AI-generated CSAM, along with users' views on this type of content and the routes they follow to circumvent technological barriers intended to prevent malicious use. As the final contribution of this research, an overview is presented of current legislative developments in all member countries of the INHOPE organization and the issues arising in the process of regulating AI-CSAM, shedding light on the legal challenges of regulating and limiting the phenomenon.
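The core of the correlation analysis described above can be sketched as a set intersection between domains appearing in hotline reports and domains mentioned in forum posts. This is a minimal illustrative sketch, not the paper's pipeline; the domain names are invented placeholders.

```python
# Hypothetical sketch of the clear-net / dark-web correlation step:
# intersect domains from hotline reports with domains extracted from
# forum discussions. All domain names below are invented placeholders.

hotline_domains = {"example-a.com", "example-b.net", "example-c.org"}
forum_domains = {"example-b.net", "example-c.org", "example-d.io"}

overlap = hotline_domains & forum_domains          # domains seen in both
overlap_rate = len(overlap) / len(hotline_domains)  # share of reports matched

print(sorted(overlap))          # ['example-b.net', 'example-c.org']
print(round(overlap_rate, 2))   # 0.67
```

In practice, the domains on each side would first be normalized (lowercased, stripped of subdomains and URL paths) before intersecting, so that the same site is not counted twice under different spellings.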


African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda

Okolo, Chinasa T.

arXiv.org Artificial Intelligence

In light of prominent discourse around the negative implications of generative AI, an emerging area of research is investigating the current and estimated impacts of AI-generated propaganda on African citizens participating in elections. Throughout Africa, there have already been suspected cases of AI-generated propaganda influencing electoral outcomes or precipitating coups in countries like Nigeria, Burkina Faso, and Gabon, underscoring the need for comprehensive research in this domain. This paper aims to highlight the risks associated with the spread of generative AI-driven disinformation within Africa while concurrently examining the roles of government, civil society, academia, and the general public in the responsible development, practical use, and robust governance of AI. To understand how African governments might effectively counteract the impact of AI-generated propaganda, this paper presents case studies illustrating the current usage of generative AI for election-related propaganda in Africa. Subsequently, this paper discusses efforts by fact-checking organisations to mitigate the negative impacts of disinformation, explores the potential for new initiatives to actively engage citizens in literacy efforts to combat disinformation spread, and advocates for increased governmental regulatory measures. Overall, this research seeks to increase comprehension of the potential ramifications of AI-generated propaganda on democratic processes within Africa and propose actionable strategies for stakeholders to address these multifaceted challenges.


Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering

Wen, Zhihua, Tian, Zhiliang, Jian, Zexin, Huang, Zhen, Ke, Pei, Gao, Yifu, Huang, Minlie, Li, Dongsheng

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are widely used for knowledge-seeking yet suffer from hallucinations. The knowledge boundary (KB) of an LLM limits its factual understanding, beyond which it may begin to hallucinate. Investigating LLMs' perception of their KB is crucial for detecting hallucinations and for reliable generation. Current studies probe LLMs' KB on questions with a concrete answer (closed-ended questions) while paying limited attention to semi-open-ended questions (SoeQ) that correspond to many potential answers. Some researchers approach this by judging whether a question is answerable or not. However, this paradigm is unsuitable for SoeQ, which are usually partially answerable, containing both answerable and ambiguous (unanswerable) answers. Ambiguous answers are essential for knowledge-seeking, but they may go beyond the KB of LLMs. In this paper, we perceive the LLMs' KB with SoeQ by discovering more ambiguous answers. First, we apply an LLM-based approach to construct SoeQ and obtain answers from a target LLM. Unfortunately, the output probabilities of mainstream black-box LLMs are inaccessible, so low-probability ambiguous answers cannot be sampled directly. Therefore, we apply an open-source auxiliary model to explore ambiguous answers for the target LLM. We calculate the nearest semantic representation for existing answers to estimate their probabilities, with which we reduce the generation probability of high-probability answers to achieve more effective generation. Finally, we compare the results of a RAG-based evaluation and LLM self-evaluation to categorize four types of ambiguous answers that are beyond the KB of the target LLM. Following our method, we construct a dataset to perceive the KB of GPT-4. We find that GPT-4 performs poorly on SoeQ and is often unaware of its KB. Moreover, our auxiliary model, LLaMA-2-13B, is effective in discovering more ambiguous answers.
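The down-weighting step described in the abstract, reducing the generation probability of answers semantically close to ones already found so that rarer ambiguous answers surface, can be sketched as follows. This is a simplified illustration under invented assumptions (toy 2-D embeddings, hand-picked probabilities, and a single penalty coefficient `alpha`), not the authors' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reweight(candidates, found_embeddings, alpha=0.9):
    """Down-weight candidates semantically close to already-found answers,
    then renormalize, so low-probability answers are sampled more often.

    candidates: {name: (probability, embedding)}
    found_embeddings: embeddings of answers already discovered
    """
    out = {}
    for name, (p, emb) in candidates.items():
        sim = max((cosine(emb, e) for e in found_embeddings), default=0.0)
        out[name] = p * (1 - alpha * max(sim, 0.0))
    z = sum(out.values())
    return {k: v / z for k, v in out.items()}

# Toy setup: "common" is near an already-found answer, "rare" is not.
candidates = {"common": (0.8, [1.0, 0.0]), "rare": (0.2, [0.0, 1.0])}
found = [[1.0, 0.0]]

new_dist = reweight(candidates, found)
print(round(new_dist["rare"], 2))    # 0.71 -> rare answer now dominates
print(round(new_dist["common"], 2))  # 0.29
```

After reweighting, the previously dominant answer is suppressed and the semantically distinct, low-probability answer becomes the most likely draw, which is the behaviour the method relies on to explore ambiguous answers.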


Machine Intelligence in Africa: a survey

Tapo, Allahsera Auguste, Traore, Ali, Danioko, Sidy, Tembine, Hamidou

arXiv.org Artificial Intelligence

In the last five years, the availability of large audio datasets in African countries has opened unlimited opportunities to build machine intelligence (MI) technologies that are closer to the people and that speak, learn, understand, and do business in local languages, including for those who cannot read and write. Unfortunately, these audio datasets are not fully exploited by current MI tools, leaving many Africans out of MI business opportunities. Additionally, many state-of-the-art MI models are not culture-aware, and the ethics of their adoption indexes are questionable. This lack of cultural awareness is a major drawback in many applications in Africa. This paper summarizes recent developments in machine intelligence in Africa from a multi-layer, multiscale, and culture-aware ethics perspective, showcasing MI use cases in 54 African countries through 400 articles on MI research, industry, and government actions, as well as uses in art, music, the informal economy, and small businesses in Africa. The survey also opens discussions on the reliability of MI rankings and indexes on the African continent, as well as algorithmic definitions of unclear terms used in MI.